278 research outputs found

    Convergence of Stochastic Processes

    Often the best way to adumbrate a dark and dense assemblage of material is to describe the background in contrast to which the edges of the nebulosity may be clearly discerned. Hence, perhaps the most appropriate way to introduce this paper is to describe what it is not. It is not a comprehensive study of stochastic processes, nor an in-depth treatment of convergence. In fact, on the surface, the material covered in this paper is nothing more than a compendium of seemingly loosely connected and barely miscible theorems, methods and conclusions from the three main papers surveyed ([VC71], [Pol89] and [DL91]).

    Feature-Based Localization Using Fixed Ultrasonic Transducers

    We describe an approach for mobile robot localization based on geometric features extracted from ultrasonic data. As is well known, a single sonar measurement using a standard Polaroid™ sensor, though yielding relatively accurate information regarding the range of a reflective surface patch, provides scant information about the location in azimuth or elevation of that patch. This lack of sufficiently precise localization of the reflective patch hampers any attempt at data association, clustering of multiple measurements or subsequent classification and inference. In previous work [15, 16] we proposed a multi-stage approach to clustering which aggregates sonic data accumulated from arbitrary transducer locations in a sequential fashion. It is computationally tractable and efficient despite the inherent exponential nature of clustering, and is robust in the face of noise in the measurements. It therefore lends itself to applications where the transducers are fixed relative to the mobile platform, where remaining stationary during a scan is both impractical and infeasible, and where dead-reckoning errors can be substantial. In the current work we apply this feature extraction algorithm to the problem of localization in a partially known environment. Feature-based localization boasts advantages in robustness and speed over several other approaches. We limit the set of extracted features to planar surfaces. We describe an approach for establishing correspondences between extracted and map features. Once such correspondences have been established, a least squares approach to mobile robot pose estimation is delineated. It is shown that once correspondence has been found, the pose estimation may be performed in time linear in the number of extracted features. The decoupling of the correspondence matching and estimation stages is shown to offer advantages in speed and precision. Since the clustering algorithm aggregates sonic data accumulated from arbitrary transducer locations, there are no constraints on the trajectory to be followed for localization except that sufficiently large portions of features be ensonified to allow clustering. Preliminary experiments indicate the usefulness of the approach, especially for accurate estimation of orientation.
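    To make the linear-time claim concrete, the following is a minimal sketch of least-squares pose estimation from matched planar features, assuming a 2D pose (x, y, theta) and features parametrized by a unit normal n and signed distance d; the parametrization, function names and use of NumPy are illustrative assumptions, not details taken from the paper.

        import numpy as np

        def estimate_pose(robot_feats, map_feats):
            """Least-squares 2D pose from matched planar (line) features.
            robot_feats, map_feats: lists of (n, d) pairs, n a length-2 unit normal and
            d a signed distance, with robot_feats[i] matched to map_feats[i].
            Returns (theta, t) such that a robot-frame point p maps to R(theta) @ p + t."""
            # Orientation: each correspondence constrains theta through the angle
            # between the robot-frame and map-frame normals; a circular mean combines
            # them in time linear in the number of features.
            s = c = 0.0
            for (nr, _), (nm, _) in zip(robot_feats, map_feats):
                dth = np.arctan2(nm[1], nm[0]) - np.arctan2(nr[1], nr[0])
                s, c = s + np.sin(dth), c + np.cos(dth)
            theta = np.arctan2(s, c)
            # Translation: with the normals aligned, the feature offsets satisfy
            # nm . t ~= dm - dr, a small linear system solved by least squares.
            A = np.array([nm for (_, _), (nm, _) in zip(robot_feats, map_feats)])
            b = np.array([dm - dr for (_, dr), (_, dm) in zip(robot_feats, map_feats)])
            t, *_ = np.linalg.lstsq(A, b, rcond=None)
            return theta, t

    Both stages visit each correspondence once, which is where a linear-in-features cost would come from; the split into an orientation step and a translation step is an assumption of this sketch, not a statement about the paper's estimator.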

    Stereo-Based Region-Growing using String Matching

    We present a novel stereo algorithm based on a coarse texture segmentation preprocessing phase. Matching is performed using a string comparison. Matching sub-strings correspond to matching sequences of textures. Inter-scanline clustering of matching sub-strings yields regions of matching texture. The shapes of these regions yield information concerning an object's height, width and azimuthal position relative to the camera pair. Hence, rather than the standard dense depth map, the output of this algorithm is a segmentation of objects in the scene. Such a format is useful for the integration of stereo with other sensor modalities on a mobile robotic platform. It is also useful for localization; the height and width of a detected object may be used for landmark recognition, while depth and relative azimuthal location determine pose. The algorithm does not rely on the monotonicity of order of image primitives. Occlusions, exposures, and foreshortening effects are not problematic. The algorithm can deal with certain types of transparencies. It is computationally efficient, and very amenable to parallel implementation. Further, the epipolar constraints may be relaxed to some small but significant degree. A version of the algorithm has been implemented and tested on various types of images. It performs best on random dot stereograms, on images with easily filtered backgrounds (as in synthetic images), and on real scenes with uncontrived backgrounds.
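    As a rough illustration of the string-matching step (not the paper's algorithm, which notably does not require monotonic ordering of primitives), the sketch below reduces a scanline pair to strings of coarse texture labels and reads disparities off the offsets of matching substrings; the label alphabet and the use of Python's difflib are assumptions made only for this example.

        from difflib import SequenceMatcher

        def match_scanlines(left_labels, right_labels, min_len=3):
            """Find matching substrings of texture labels on a left/right scanline pair.
            Returns (left_start, right_start, length, disparity) for each kept match."""
            matcher = SequenceMatcher(None, left_labels, right_labels, autojunk=False)
            matches = []
            for m in matcher.get_matching_blocks():
                if m.size >= min_len:  # discard short, noise-prone matches
                    matches.append((m.a, m.b, m.size, m.a - m.b))
            return matches

        # A textured region ('T') shifted by two columns against a background ('b'):
        left = list("bbbbTTTTTTbbbb")
        right = list("bbTTTTTTbbbbbb")
        print(match_scanlines(left, right))  # one long match, disparity 2

    Clustering such matches across neighbouring scanlines is what yields the regions of matching texture described above.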

    Precision cluster mass determination from weak lensing

    Weak gravitational lensing has been used extensively in the past decade to constrain the masses of galaxy clusters, and is the most promising observational technique for providing the mass calibration necessary for precision cosmology with clusters. There are several challenges in estimating cluster masses, particularly (a) the sensitivity to astrophysical effects and observational systematics that modify the signal relative to the theoretical expectations, and (b) biases that can arise due to assumptions in the mass estimation method, such as the assumed radial profile of the cluster. All of these challenges are more problematic in the inner regions of the cluster, suggesting that their influence would ideally be suppressed for the purpose of mass estimation. However, at any given radius the differential surface density measured by lensing is sensitive to all mass within that radius, and the corrupted signal from the inner parts is spread out to all scales. We develop a new statistic Υ(R; R0) that is ideal for estimation of cluster masses because it completely eliminates mass contributions below a chosen scale (which we suggest should be about 20 per cent of the virial radius), and thus reduces sensitivity to systematic and astrophysical effects. We use simulated and analytical profiles including shape noise to quantify systematic biases on the estimated masses for several standard methods of mass estimation, finding that these can lead to significant mass biases that range from 10 to over 50 per cent. The mass uncertainties when using the new statistic Υ(R; R0) are reduced by up to a factor of 10 relative to the standard methods, while only moderately increasing the statistical errors. This new method of mass estimation will enable a higher level of precision in future science work with weak lensing mass estimates for galaxy clusters.
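    For reference, one standard construction with exactly this property (plausibly the form intended here, though the exact definition should be taken from the paper itself) is the annular differential surface density built from the lensing observable ΔΣ(R):

        \Upsilon(R; R_0) \;=\; \Delta\Sigma(R) \;-\; \frac{R_0^2}{R^2}\,\Delta\Sigma(R_0).

    Mass at projected radii below R_0 enters ΔΣ(R) only through the mean interior surface density, as M(<R_0)/(\pi R^2), and enters ΔΣ(R_0) as M(<R_0)/(\pi R_0^2); the second term therefore cancels that contribution exactly, leaving a statistic insensitive to the poorly modelled inner region.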

    Contending cultures of counterterrorism: transatlantic divergence or convergence?

    Terrorist attacks on the United States, Spain and the United Kingdom have underlined the differing responses of Europe and the United States to the 'new terrorism'. This article analyses these responses through the prism of historically determined strategic cultures. For the last four years the United States has directed the full resources of a 'national security' approach towards this threat and has emphasized unilateralism. Europe, based on its own past experience of terrorism, has adopted a regulatory approach pursued through multilateralism. These divergences in transatlantic approaches, with potentially major implications for the future of the relationship, have appeared to be mitigated by a revised American strategy of counterterrorism that has emerged during 2005. However, this article contends that while strategic doctrines may change, the more immutable nature of strategic culture will make convergence difficult. This problem will be compounded by the fact that neither Europe nor America has yet addressed the deeper connections between terrorism and the process of globalization.

    Analysis of a distributed fiber-optic temperature sensor using single-photon detectors

    We demonstrate a high-accuracy distributed fiber-optic temperature sensor using superconducting nanowire single-photon detectors and single-photon counting techniques. Our demonstration uses inexpensive single-mode fiber at standard telecommunications wavelengths as the sensing fiber, which enables extremely low-loss experiments and compatibility with existing fiber networks. We show that the uncertainty of the temperature measurement decreases with longer integration periods, but is ultimately limited by the calibration uncertainty. Temperature uncertainty on the order of 3 K is possible with spatial resolution on the order of 1 cm and an integration period as short as 60 seconds. Also, we show that the measurement is subject to systematic uncertainties, such as polarization fading, which can be reduced with a polarization diversity receiver.
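    As a rough consistency check, based only on the standard round-trip time-of-flight relation for reflectometric distributed sensing and not on the details of this particular setup, centimetre-scale spatial resolution implies timing resolution near

        \Delta z = \frac{c\,\Delta t}{2 n_g}
        \quad\Rightarrow\quad
        \Delta t = \frac{2 n_g\,\Delta z}{c}
        \approx \frac{2 \times 1.47 \times 0.01\ \mathrm{m}}{3.0\times 10^{8}\ \mathrm{m/s}}
        \approx 100\ \mathrm{ps},

    taking a group index n_g of roughly 1.47 for standard single-mode fiber near 1550 nm. This is consistent with the use of superconducting nanowire single-photon detectors, whose timing jitter is typically tens of picoseconds or better.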